
    Neuromorphic auditory computing: towards a digital, event-based implementation of the hearing sense for robotics

    This work aims to advance the development of neuromorphic audio processing systems in robots through the implementation of an open-source neuromorphic cochlea, event-based models of primary auditory nuclei, and their potential use in real-time robotics applications. First, the main gaps in working with neuromorphic cochleae were identified; among them, the accessibility and usability of such sensors is a critical aspect. Analog silicon cochleae may not be as flexible as desired for some applications, whereas FPGA-based sensors are an alternative for fast prototyping and proof-of-concept applications. Therefore, a software tool was implemented for generating open-source, user-configurable Neuromorphic Auditory Sensor (NAS) models that can be deployed on any FPGA, removing the aforementioned barriers for the neuromorphic research community. Next, the biological principles of the animal auditory system were studied with the aim of continuing the development of the NAS. More specifically, the principles of binaural hearing were studied in depth in order to implement event-based models that perform real-time sound source localization. Two different approaches were followed to extract inter-aural time differences from event-based auditory signals: on the one hand, a digital, event-based design of the Jeffress model was implemented; on the other hand, a novel digital implementation of the Time Difference Encoder model was designed and implemented on FPGA. Finally, three different robotic platforms were used to evaluate the performance of the proposed real-time neuromorphic audio processing architectures. An audio-guided central pattern generator was used to control a hexapod robot in real time using spiking neural networks on SpiNNaker. Then, a sensory integration application was implemented, combining sound source localization and obstacle avoidance for autonomous robot navigation. Lastly, the NAS was integrated into the iCub robotic platform, marking the first time an event-based cochlea has been used in a humanoid robot. The conclusions obtained are then presented, and new features and improvements are proposed for future work.
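    As a rough, software-only illustration of the Jeffress model mentioned above (not the thesis's FPGA design), the Python sketch below pairs two anti-parallel delay lines with a row of coincidence detectors: the detector whose two path delays cancel the inter-aural time difference (ITD) receives the most coincident spikes. The detector count, tap delay, and coincidence window are assumed values.

```python
import numpy as np

# Event-based Jeffress sketch: spikes from the two ears travel along
# anti-parallel delay lines; detector k delays the left train by k taps
# and the right train by (N-1-k) taps, so the detector whose delays
# cancel the ITD sees the most coincidences. Parameters are assumed.
N_DETECTORS = 9      # coincidence detectors along the axis
STEP_US = 50.0       # per-tap axonal delay (microseconds)
WINDOW_US = 25.0     # coincidence window (microseconds)

def jeffress_itd(left_us, right_us):
    """Return the index of the most active coincidence detector."""
    counts = np.zeros(N_DETECTORS, dtype=int)
    for k in range(N_DETECTORS):
        l = np.asarray(left_us) + k * STEP_US
        r = np.asarray(right_us) + (N_DETECTORS - 1 - k) * STEP_US
        for t in l:  # count near-coincident left/right arrivals
            counts[k] += np.sum(np.abs(r - t) <= WINDOW_US)
    return int(np.argmax(counts))

# A source on the left reaches the left ear 200 us earlier:
left = [0.0, 1000.0, 2000.0]
right = [t + 200.0 for t in left]
print(jeffress_itd(left, right))  # prints 6: two taps right of centre
```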

    NeuroPod: a real-time neuromorphic spiking CPG applied to robotics

    Initially, robots were developed with the aim of making our lives easier by carrying out tasks that are repetitive or dangerous for humans. Although robots can already perform such tasks, the latest generation is being designed to go a step further, performing more complex tasks that until now have been carried out only by smart animals or humans. To this end, inspiration needs to be taken from biological examples. For instance, insects are able to solve complex environment navigation problems optimally, and many researchers have started to mimic how these insects behave. Recent interest in neuromorphic engineering has motivated us to present a real-time, neuromorphic, spike-based Central Pattern Generator (CPG) for application in neurorobotics, using an arthropod-like robot. A Spiking Neural Network was designed and implemented on SpiNNaker. The network models a complex Central Pattern Generator, capable of online changes, which generates three gaits for hexapod robot locomotion. Reconfigurable hardware was used to manage both the motors of the robot and the real-time communication interface with the Spiking Neural Networks. Real-time measurements confirm the simulation results, and locomotion tests show that NeuroPod can perform the gaits without any balance loss or added delay.
    Ministerio de Economía y Competitividad TEC2016-77785-
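    As a minimal, software-only illustration of the CPG principle (not the NeuroPod spiking network on SpiNNaker), the sketch below simulates a classic two-unit half-centre oscillator with mutual inhibition and slow adaptation; its anti-phase bursts are the basic building block used to phase the legs of a walking robot. All constants are illustrative assumptions.

```python
import numpy as np

# Two-unit half-centre (Matsuoka-style) oscillator: each unit receives
# tonic drive, inhibits the other, and is slowed by its own adaptation
# variable, which yields alternating bursts. Constants are illustrative.
DT, T = 1.0, 3000              # 1 ms step, 3 s of simulation
TAU_V, TAU_A = 20.0, 240.0     # membrane / adaptation time constants (ms)
DRIVE, W_INH, W_ADAPT = 1.0, 2.5, 2.5

v = np.array([0.1, 0.0])       # small asymmetry so oscillation can start
a = np.zeros(2)
rates = np.zeros((T, 2))
for t in range(T):
    r = np.maximum(v, 0.0)                       # rectified firing rate
    v += DT / TAU_V * (DRIVE - v - W_INH * r[::-1] - W_ADAPT * a)
    a += DT / TAU_A * (r - a)
    rates[t] = np.maximum(v, 0.0)

# The two columns burst in anti-phase; mapping them (and phase-shifted
# copies) onto leg groups produces gaits such as the hexapod tripod.
print(rates[::300].round(2))
```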

    Spiking row-by-row FPGA Multi-kernel and Multi-layer Convolution Processor

    Spiking convolutional neural networks (SCNNs) have become a novel approach for machine vision tasks due to their low latency when processing input stimuli from a scene and the low power consumption of this kind of solution. Event-based systems perform only sum operations, instead of the sum-of-products operations of frame-based systems. In this work, an upgrade of a neuromorphic event-based convolution accelerator for SCNNs is presented, which is able to process multiple layers with different kernel sizes. The system has a per-layer latency from 1.44 μs to 9.98 μs for kernel sizes from 1×1 to 7×7.
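    The sum-only arithmetic mentioned above can be sketched in a few lines of Python: each input event adds the kernel into a map of membrane potentials around the event address, and neurons that cross threshold emit output events and reset. The feature-map size, kernel, and threshold below are illustrative assumptions, not the accelerator's actual configuration.

```python
import numpy as np

# Event-driven convolution: each input spike ADDS the kernel into the
# membrane-potential map around its address (no multiplications), and
# any neuron crossing threshold emits an output spike and resets.
H = W = 32                # feature-map size (assumed)
K, THRESH = 3, 1.0        # kernel size and firing threshold (assumed)
kernel = np.full((K, K), 0.3)
v = np.zeros((H, W))      # membrane potentials

def process_event(x, y):
    """Integrate one input spike at (x, y); return output spike addresses."""
    r = K // 2
    x0, x1 = max(x - r, 0), min(x + r + 1, W)
    y0, y1 = max(y - r, 0), min(y + r + 1, H)
    # Add the kernel window, clipped at the image borders.
    v[y0:y1, x0:x1] += kernel[r - (y - y0):r + (y1 - y), r - (x - x0):r + (x1 - x)]
    fired = np.argwhere(v >= THRESH)
    v[v >= THRESH] = 0.0  # reset the neurons that spiked
    return [tuple(f) for f in fired]

for x, y in [(5, 5), (5, 6), (6, 5), (5, 5)]:  # short input event stream
    out = process_event(x, y)
    if out:
        print("output spikes (row, col):", out)
```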

    Live Demonstration: neuromorphic robotics, from audio to locomotion through spiking CPG on SpiNNaker

    This live demonstration presents an audio-guided neuromorphic robot: from a Neuromorphic Auditory Sensor (NAS) to locomotion using Spiking Central Pattern Generators (sCPGs). Several gaits are generated by sCPGs implemented on a SpiNNaker board. The output of these sCPGs is sent in real time to a Field-Programmable Gate Array (FPGA) board using an AER-to-SpiNN interface. The FPGA board controls the hexapod robot's joints. The robot's behavior can be changed in real time by means of the NAS: the audio information is sent to the SpiNNaker board, which classifies it using a Spiking Neural Network (SNN). Thus, the input sound activates a specific gait pattern, which in turn modifies the behavior of the robot.
    Ministerio de Economía y Competitividad TEC2016-77785-

    Deep Spiking Neural Network model for time-variant signals classification: a real-time speech recognition approach

    Speech recognition has become an important task for improving the human-machine interface. Given the limitations of current automatic speech recognition systems, such as non-real-time cloud-based solutions and their power demands, recent interest in neural networks and bio-inspired systems has motivated the implementation of new techniques. Among them, the combination of spiking neural networks and neuromorphic auditory sensors offers an alternative way to carry out human-like speech processing. In this approach, a spiking convolutional neural network model was implemented, in which the connection weights were calculated by training a convolutional neural network with specific activation functions on firing-rate-based static images built from the spiking information obtained from a neuromorphic cochlea. The system was trained and tested with a large dataset containing "left" and "right" speech commands, achieving 89.90% accuracy. A novel spiking neural network model is proposed to adapt the network trained on static images to a non-static processing approach, making it possible to classify audio signals and time series in real time.
    Ministerio de Economía y Competitividad TEC2016-77785-
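    The firing-rate images mentioned above can be sketched as simple event binning: each cochlea event, given as a (channel, timestamp) pair, increments one cell of a channels-by-time-bins image that a conventional CNN can then be trained on. The channel count, bin width, and synthetic sweep below are assumptions.

```python
import numpy as np

# Rate-coding step: bin NAS events (channel, timestamp) into a
# channels-by-time-bins image, then normalise it for CNN training.
N_CHANNELS, BIN_MS, DURATION_MS = 64, 10.0, 1000.0  # assumed values

def events_to_rate_image(channels, times_ms):
    n_bins = int(DURATION_MS / BIN_MS)
    img = np.zeros((N_CHANNELS, n_bins), dtype=np.float32)
    for ch, t in zip(channels, times_ms):
        b = int(t / BIN_MS)
        if 0 <= ch < N_CHANNELS and 0 <= b < n_bins:
            img[ch, b] += 1.0            # one event -> one count
    return img / img.max() if img.max() > 0 else img

# Synthetic example: a tone sweeping across the cochlea channels.
times = np.linspace(0.0, 999.0, 5000)
chans = (times / DURATION_MS * N_CHANNELS).astype(int)
print(events_to_rate_image(chans, times).shape)  # (64, 100) rate image
```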

    Live Demonstration: Neuromorphic Row-by-Row Multi-convolution FPGA Processor-SpiNNaker architecture for Dynamic-Vision Feature Extraction

    In this demonstration, a spiking neural network architecture for vision recognition is presented, using an FPGA spiking convolution processor based on leaky integrate-and-fire (LIF) neurons together with a SpiNNaker board. The network has been trained with the Poker-DVS dataset in order to classify the four different card symbols. The spiking convolution processor extracts features from images in the form of spikes, computed by one layer of 64 convolutions. These features are sent to an OKAERtool board that converts them from AER to the 2-of-7 protocol so they can be classified by a spiking neural network deployed on a SpiNNaker platform.
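    As an illustration of the address-event representation (AER) used to move spikes between the boards, the sketch below packs and unpacks an (x, y, polarity) event into a single address word. The bit layout is an assumed example; the actual formats used by the convolution processor and the OKAERtool board may differ.

```python
# AER word packing: one spike becomes one address word on the bus.
# The (x | y | polarity) layout below is an assumed example, not the
# exact format used by the convolution processor or OKAERtool.
X_BITS, Y_BITS = 7, 7            # enough for 128x128 addresses

def encode_event(x, y, polarity):
    """Pack a spike into a single AER address word."""
    return (x << (Y_BITS + 1)) | (y << 1) | (polarity & 1)

def decode_event(word):
    """Unpack an AER word back into (x, y, polarity)."""
    return ((word >> (Y_BITS + 1)) & (2 ** X_BITS - 1),
            (word >> 1) & (2 ** Y_BITS - 1),
            word & 1)

evt = encode_event(42, 17, 1)
print(hex(evt), decode_event(evt))   # round-trips to (42, 17, 1)
```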

    Interfacing PDM sensors with PFM spiking systems: application for Neuromorphic Auditory Sensors

    In this paper, we present a sub-system that converts audio information from low-power MEMS microphones with pulse density modulation (PDM) output into rate-coded spike streams. These spikes represent the input signal of a Neuromorphic Auditory Sensor (NAS), which is implemented with Spike Signal Processing (SSP) building blocks. For this conversion, we have designed an HDL component for FPGAs that interfaces with PDM microphones and converts their pulses into temporally distributed spikes following a pulse frequency modulation (PFM) scheme with an accurately configurable inter-spike interval. The new FPGA component has been tested in two scenarios: first as a stand-alone circuit for its characterization, and then integrated into a full NAS design to verify its behavior. This PDM interface demands less than 1% of the resources of a Spartan-6 FPGA and has a power consumption below 5 mW.
    Ministerio de Economía y Competitividad TEC2016-77785-
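    A behavioural software model of such a PDM-to-spike conversion can be sketched as follows: incoming PDM bits are accumulated, and a spike is emitted whenever the running sum crosses a threshold, subject to a configurable minimum inter-spike interval. The clock rate, threshold, and the toy sigma-delta modulator used to generate test bits are assumptions, not the HDL component's implementation.

```python
import numpy as np

# Behavioural PDM-to-PFM model: accumulate PDM bits and emit a spike
# each time the sum crosses THRESHOLD, keeping at least MIN_ISI_CLKS
# clock cycles between spikes. All constants are assumed.
PDM_RATE_HZ = 3_072_000   # typical PDM microphone clock
THRESHOLD = 64            # accumulated ones per spike
MIN_ISI_CLKS = 16         # configurable minimum inter-spike interval

def pdm_to_spikes(bits):
    """Return spike times (in PDM clock cycles) for a PDM bitstream."""
    spikes, acc, last = [], 0, -MIN_ISI_CLKS
    for n, b in enumerate(bits):
        acc += b
        if acc >= THRESHOLD and n - last >= MIN_ISI_CLKS:
            spikes.append(n)
            acc -= THRESHOLD
            last = n
    return spikes

# Generate test bits: a 1 kHz tone through a toy first-order sigma-delta.
t = np.arange(30_000) / PDM_RATE_HZ
x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
err, bits = 0.0, []
for s in x:
    bit = 1 if s >= err else 0
    err += (2 * bit - 1) - s          # error feedback
    bits.append(bit)
print(len(pdm_to_spikes(bits)), "spikes")  # rate tracks bit density
```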

    Low-Power Embedded System for Gait Classification Using Neural Networks

    Abnormal foot postures can be detected during gait by measuring plantar pressures under both dynamic and static conditions. Such detection may prevent injuries to the lower limbs such as fractures, ankle sprains, or plantar fasciitis. This information can be obtained with an instrumented insole embedding pressure sensors and a low-power microcontroller. However, these sensors are placed at sparse locations inside the insole, so it is not easy to manually correlate their values with the gait type; this is why a machine learning system is needed. In this work, we analyse the feasibility of integrating a machine learning classifier into a low-power embedded system in order to obtain information about the user's gait in real time and prevent future injuries, and we analyse the execution times, the power consumption, and the model's effectiveness. The classifier is trained using an acquired dataset of more than 3000 steps from 6 different users. Results show that this system provides an accuracy of over 99%, and power consumption tests show a battery life of over 25 days.
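    To make concrete the kind of classifier that fits a low-power microcontroller, the sketch below reduces one detected step to a small feature vector and classifies it with a tiny feed-forward network that needs only matrix-vector products and a ReLU at run time. The sensor count, feature set, class labels, and (random) weights are illustrative; in the real system the weights would come from offline training.

```python
import numpy as np

# Per-step features from the insole's pressure sensors feed a tiny MLP;
# run-time cost is two matrix-vector products and a ReLU. Sensor count,
# features, labels and (random) weights are illustrative only.
N_SENSORS = 8
CLASSES = ["pronation", "neutral", "supination"]   # hypothetical labels

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3 * N_SENSORS)), np.zeros(16)  # stands in for trained weights
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def step_features(pressures):
    """pressures: (samples, sensors) array covering one detected step."""
    p = np.asarray(pressures, dtype=np.float32)
    # Mean, peak and normalised time-of-peak per sensor summarise the step.
    return np.concatenate([p.mean(0), p.max(0), p.argmax(0) / len(p)])

def classify(pressures):
    h = np.maximum(W1 @ step_features(pressures) + b1, 0.0)  # hidden ReLU layer
    return CLASSES[int(np.argmax(W2 @ h + b2))]

one_step = rng.random((50, N_SENSORS))   # 50 samples of a synthetic step
print(classify(one_step))
```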

    Performance Evaluation of Neural Networks for Animal Behaviors Classification: Horse Gaits Case Study

    The study and monitoring of wildlife has always been a subject of great interest. Studying the behavior of wild animals is a very complex task, due to the difficulty of tracking them and of classifying their behaviors from the collected sensory information. Novel technology allows the design of low-cost systems that facilitate these tasks. Some commercial solutions to this problem currently exist; however, they cannot achieve highly accurate classification because of the limited information they gather. In this work, we propose an animal behavior recognition, classification, and monitoring system based on a smart collar device equipped with inertial sensors and a feed-forward neural network, or Multi-Layer Perceptron (MLP), that classifies the animal's behavior from the collected sensory information. Experimental results on a horse gait case study show that the recognition system achieves an accuracy of up to 95.6%.
    Junta de Andalucía P12-TIC-130
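    A sketch of such a pipeline, under stated assumptions (the sampling rate, window length, feature set, and synthetic data below are all illustrative), reduces each window of inertial data to a few statistics and trains a small MLP on them:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Windows of collar accelerometer data are reduced to a few statistics
# and fed to a small MLP. Sampling rate, window length, features and
# the synthetic gait-like signals are illustrative assumptions.
FS, WIN = 50, 100              # 50 Hz sampling, 2-second windows

def window_features(acc):
    """acc: (WIN, 3) accelerometer window -> 4-element feature vector."""
    mag = np.linalg.norm(acc, axis=1)
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    return np.array([mag.mean(), mag.std(),
                     np.abs(np.diff(mag)).mean(),   # jerkiness
                     spectrum.argmax()])            # dominant frequency bin

# Synthetic stand-in data: gaits differ in oscillation frequency/amplitude.
rng = np.random.default_rng(1)
gaits = {"walk": (1.0, 0.5), "trot": (2.5, 1.0), "gallop": (4.0, 2.0)}
X, y = [], []
for label, (freq, amp) in enumerate(gaits.values()):
    for _ in range(100):
        t = np.arange(WIN) / FS
        acc = amp * np.sin(2 * np.pi * freq * t)[:, None] \
              + rng.normal(scale=0.2, size=(WIN, 3))
        X.append(window_features(acc))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```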